Asymptotic Convergence Rate of the EM Algorithm for Gaussian Mixtures
Authors
Abstract
It is well known that the convergence of the expectation-maximization (EM) algorithm can be faster than that of conventional first-order iterative algorithms when the overlap in the given mixture is small. However, this claim had not previously been proved mathematically. This article studies the problem asymptotically, in the setting of Gaussian mixtures, under the theoretical framework of Xu and Jordan (1996). It is proved that the asymptotic convergence rate of the EM algorithm for Gaussian mixtures, locally around the true solution Theta*, is o(e^(0.5 - epsilon)(Theta*)), where epsilon > 0 is an arbitrarily small number, o(x) denotes a higher-order infinitesimal as x --> 0, and e(Theta*) is a measure of the average overlap of the Gaussians in the mixture. In other words, the large-sample local convergence rate of the EM algorithm tends to be asymptotically superlinear as e(Theta*) tends to zero.
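The abstract's claim can be observed empirically. The sketch below (an illustrative assumption, not the paper's construction) runs plain EM on a synthetic two-component 1-D Gaussian mixture with well-separated components, tracking the parameter error per iteration; with little overlap, the error ratio between consecutive iterations shrinks quickly, consistent with the near-superlinear local behavior the abstract describes. All parameter values here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two well-separated 1-D Gaussians (small overlap e(Theta*)).
n = 2000
z = rng.random(n) < 0.4
x = np.where(z, rng.normal(-5.0, 1.0, n), rng.normal(5.0, 1.0, n))

def em_step(x, pi, mu, sigma2):
    """One EM iteration for a two-component 1-D Gaussian mixture."""
    # E-step: posterior responsibility of component 1 for each point
    # (the 1/sqrt(2*pi) constant cancels in the ratio).
    d1 = pi * np.exp(-0.5 * (x - mu[0]) ** 2 / sigma2[0]) / np.sqrt(sigma2[0])
    d2 = (1 - pi) * np.exp(-0.5 * (x - mu[1]) ** 2 / sigma2[1]) / np.sqrt(sigma2[1])
    r = d1 / (d1 + d2)
    # M-step: responsibility-weighted re-estimates of weight, means, variances.
    pi_new = r.mean()
    mu_new = np.array([(r * x).sum() / r.sum(),
                       ((1 - r) * x).sum() / (1 - r).sum()])
    s2_new = np.array([(r * (x - mu_new[0]) ** 2).sum() / r.sum(),
                       ((1 - r) * (x - mu_new[1]) ** 2).sum() / (1 - r).sum()])
    return pi_new, mu_new, s2_new

# Deliberately poor initialization.
pi, mu, sigma2 = 0.5, np.array([-1.0, 1.0]), np.array([4.0, 4.0])
errs = []
for _ in range(30):
    pi, mu, sigma2 = em_step(x, pi, mu, sigma2)
    errs.append(abs(mu[0] - (-5.0)) + abs(mu[1] - 5.0))

# With small overlap the per-iteration error contracts very fast;
# inspect consecutive ratios errs[k+1]/errs[k] to see the rate.
print(errs[:5])
```

Shrinking the separation between the two component means (increasing the overlap) makes the same loop visibly slower, which is the dependence on e(Theta*) that the paper quantifies.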
Similar resources
Asymptotic convergence properties of the EM algorithm with respect to the overlap in the mixture
The EM algorithm is generally considered a linearly convergent algorithm. However, many empirical results show that it can converge significantly faster than gradient-based first-order iterative algorithms, especially when the overlap of densities in a mixture is small. This paper explores this issue theoretically for mixtures of densities from a class of exponential families. We have p...
Asymptotic Convergence Properties of EM Type Algorithms
We analyze the asymptotic convergence properties of a general class of EM-type algorithms for estimating an unknown parameter via alternating estimation and maximization. As examples, this class includes ML-EM, penalized ML-EM, Green's OSL-EM, and many other approximate EM algorithms. A theorem is given which provides conditions for monotone convergence with respect to a given norm and specifies an ...
Convergence of the EM Algorithm for Gaussian Mixtures with Unbalanced Mixing Coefficients
The speed of convergence of the Expectation Maximization (EM) algorithm for Gaussian mixture model fitting is known to be dependent on the amount of overlap among the mixture components. In this paper, we study the impact of mixing coefficients on the convergence of EM. We show that when the mixture components exhibit some overlap, the convergence of EM becomes slower as the dynamic range among...
On Convergence Properties of the EM Algorithm for Gaussian Mixtures
We build up the mathematical connection between the "Expectation-Maximization" (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite Gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix P, and we provide an explicit expression for the matrix. We then analyze the convergence of EM in terms of special ...
Global Convergence of Model Reference Adaptive Search for Gaussian Mixtures
While the Expectation-Maximization (EM) algorithm is a popular and convenient tool for mixture analysis, it only produces solutions that are locally optimal, and thus may not achieve the globally optimal solution. This paper introduces a new algorithm, based on the global optimization algorithm Model Reference Adaptive Search (MRAS), designed to produce globally-optimal solutions in the estimat...
Journal: Neural Computation
Volume 12, Issue 12
Pages: -
Publication year: 2000